
    March CRF: an Efficient Test for Complex Read Faults in SRAM Memories

    In this paper we study Complex Read Faults (CRFs) in SRAMs, a combination of various malfunctions that affect the read operation in nanoscale memories. All the memory elements involved in the read operation are studied, highlighting the causes of the realistic faults affecting this operation, and the requirements to cover these fault models are given. We show that the different causes of read failure are independent and may coexist in nanoscale SRAMs, summing their effects and provoking Complex Read Faults. We show that a test methodology covering these new read faults consists of test patterns that match the requirements for covering all the individual simple read fault models. We propose a low-complexity (?2N) test, March CRF, that effectively covers all the realistic Complex Read Faults.
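    As a rough illustration of the mechanics behind March-style testing (the published March CRF element sequence is not given in this abstract, so the elements below are placeholders rather than the actual test), a memory can be modelled as a bit array and each March element as an addressing order plus a list of read/write operations:

```python
# Minimal March-test framework sketch (illustrative only; the real March CRF
# elements are not reproduced here).
UP, DOWN = 1, -1  # addressing orders: the first degree of freedom of March tests

def run_march(memory, elements):
    """Apply March elements to a bit-addressable memory model.

    memory   : list of 0/1 values, one per cell
    elements : list of (order, ops), each op being ('w', value) or ('r', expected)
    Returns (address, element_index) pairs where a read mismatched.
    """
    failures = []
    n = len(memory)
    for ei, (order, ops) in enumerate(elements):
        addresses = range(n) if order == UP else range(n - 1, -1, -1)
        for addr in addresses:
            for op, value in ops:
                if op == 'w':
                    memory[addr] = value
                elif memory[addr] != value:   # read and compare
                    failures.append((addr, ei))
    return failures

# Placeholder element list (NOT the published March CRF sequence):
example_elements = [
    (UP,   [('w', 0)]),            # initialise all cells
    (UP,   [('r', 0), ('r', 0)]),  # back-to-back reads stress the read path
    (DOWN, [('w', 1), ('r', 1)]),
]
print(run_march([0] * 16, example_elements))  # [] on a fault-free memory model
```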

    Minimizing Test Power in SRAM through Reduction of Pre-charge Activity

    In this paper we analyze the test power of SRAM memories and demonstrate that the full functional pre-charge activity is not necessary during test mode because the addressing sequence is predictable. We exploit this observation to minimize power dissipation during test by eliminating the unnecessary power consumption associated with pre-charge activity. This is achieved through modified pre-charge control circuitry that exploits the first degree of freedom of March tests, which allows a specific addressing sequence to be chosen. The efficiency of the proposed solution is validated through extensive SPICE simulations.
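    A behavioural sketch of the pre-charge argument, under the assumption that functional mode must pre-charge every bit-line pair each cycle (the next address is unknown) while a fixed March addressing order lets test mode pre-charge only the column about to be accessed; the circuit details and numbers are illustrative, not taken from the paper:

```python
# Count pre-charge events in functional mode vs. test mode (behavioural model only).
def precharge_events(n_columns, accesses, test_mode):
    events = 0
    for _ in accesses:
        # test mode: only the column about to be accessed is pre-charged
        # functional mode: all columns are pre-charged every cycle
        events += 1 if test_mode else n_columns
    return events

columns = 64
# A fast-column March addressing sequence visits every column once per element pass.
access_sequence = list(range(columns)) * 4   # 4 passes, purely illustrative

print("functional-mode pre-charges:", precharge_events(columns, access_sequence, False))
print("test-mode pre-charges      :", precharge_events(columns, access_sequence, True))
```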

    Rule Managed Reporting in Energy Controlled Wireless Sensor Networks

    This paper proposes a technique to extend the lifetime of a wireless sensor network, whereby each sensor node decides its level of network involvement based on its energy resources and on the information carried by each message (ascertained through a system of rules). Results obtained from the simulation of an industrial monitoring scenario show that a considerable increase in lifetime and connectivity can be obtained.
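    A minimal sketch of this kind of rule-managed decision; the message types, importance scores and energy threshold below are hypothetical, not taken from the paper:

```python
# Each rule maps a message predicate to an importance score; a node forwards a
# message only when its importance outweighs the node's current energy scarcity.
RULES = [
    (lambda m: m.get("type") == "alarm",            1.0),
    (lambda m: m.get("type") == "threshold_cross",  0.7),
    (lambda m: m.get("type") == "periodic_reading", 0.2),
]

def importance(message):
    return max((score for pred, score in RULES if pred(message)), default=0.0)

def should_forward(message, energy_fraction):
    """Forward only if message importance exceeds the node's energy scarcity."""
    return importance(message) >= (1.0 - energy_fraction)

# A nearly depleted node (20% energy left) still relays alarms but drops routine readings.
print(should_forward({"type": "alarm"}, 0.2))             # True
print(should_forward({"type": "periodic_reading"}, 0.2))  # False
```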

    Energy-Driven Computing: Rethinking the Design of Energy Harvesting Systems

    Energy harvesting computing has been gaining increasing traction over the past decade, fueled by technological developments and rising demand for autonomous and battery-free systems. Energy harvesting introduces numerous challenges to embedded systems, but arguably the greatest is the required transition from an energy source that typically provides virtually unlimited power for a reasonable period of time until it becomes exhausted, to a power source that is highly unpredictable and dynamic (both spatially and temporally, and with a range spanning many orders of magnitude). The typical approach to overcoming this is the addition of intermediate energy storage/buffering to smooth out the temporal dynamics of both power supply and consumption. This has the advantage that, if correctly sized, the system ‘looks like’ a battery-powered system; however, it also adds volume, mass, cost and complexity and, if not sized correctly, unreliability. In this paper, we consider energy-driven computing, where systems are designed from the outset to operate from an energy harvesting source. Such systems typically contain little or no additional energy storage (instead relying on tiny parasitic and decoupling capacitance), alleviating the aforementioned issues. Examples of energy-driven computing include transient systems (which power down when the supply disappears and efficiently continue execution when it returns) and power-neutral systems (which operate directly from the instantaneous power harvested, gracefully modulating their consumption and performance to match the supply). We introduce a taxonomy of energy-driven computing, articulating how power-neutral, transient, and energy-driven systems represent a different class of computing from conventional approaches.
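    A small sketch of what a power-neutral control loop might look like: the system picks the fastest operating point whose consumption does not exceed the power harvested at that instant, and powers down otherwise. The operating points and harvested-power trace are assumptions for illustration, not figures from the paper:

```python
# Hypothetical (label, power draw in mW) operating points, ordered by performance.
LEVELS = [("off", 0.0), ("low", 5.0), ("mid", 12.0), ("high", 30.0)]

def power_neutral_level(harvested_mw):
    """Return the highest-performance level sustainable from the instantaneous harvest."""
    chosen = "off"
    for label, draw in LEVELS:
        if draw <= harvested_mw:
            chosen = label
    return chosen

harvest_trace_mw = [0.0, 3.0, 6.0, 14.0, 40.0, 8.0]   # dynamic supply, illustrative
print([power_neutral_level(p) for p in harvest_trace_mw])
# ['off', 'off', 'low', 'mid', 'high', 'low']
```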

    NBTI and leakage aware sleep transistor design for reliable and energy efficient power gating

    In this paper we show that power gating techniques become more effective during their lifetime, since the aging of sleep transistors (STs) due to negative bias temperature instability (NBTI) drastically reduces leakage power. Based on this property, we propose an NBTI- and leakage-aware ST design method for reliable and energy-efficient power gating. Through SPICE simulations, we show a lifetime extension of up to 19.9x and an average leakage power reduction of up to 14.4% compared to a standard ST design approach, without additional area overhead. Finally, when a maximum 10-year lifetime target is considered, we show that the proposed method enables multiple beneficial options compared to a standard ST design method: either improving the circuit operating frequency by up to 9.53% or reducing the ST area overhead by up to 18.4%.
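    A back-of-the-envelope sketch of the underlying effect, assuming subthreshold leakage falls roughly exponentially with the NBTI-induced threshold-voltage shift; the slope factor and shift values are hypothetical and are not the paper's data:

```python
from math import exp

N_FACTOR = 1.5        # assumed subthreshold slope factor
V_THERMAL = 0.026     # thermal voltage at room temperature, volts

def leakage_ratio(delta_vth):
    """Leakage of the aged sleep transistor relative to the fresh device."""
    return exp(-delta_vth / (N_FACTOR * V_THERMAL))

for dvth_mv in (0, 20, 40, 60):   # hypothetical NBTI-induced Vth shifts
    print(f"dVth = {dvth_mv:2d} mV -> leakage x{leakage_ratio(dvth_mv / 1000):.2f}")
```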

    Diagnosis of power switches with power-distribution-network consideration

    This paper examines the diagnosis of power switches when the power distribution network (PDN) is modeled as a high-resolution distributed electrical model. The analysis shows that, for a diagnosis method to achieve high diagnosis accuracy and resolution, the distributed nature of the PDN should not be simplified into a lumped model. For this reason, a PDN-aware diagnosis method for power-switch fault grading is proposed. The proposed method utilizes a novel signature-generation design-for-testability (DFT) unit, whose signatures are processed by a novel diagnosis algorithm that grades the magnitude of faults. Through simulations of physical-layout SPICE models, we explore the trade-offs of the proposed method between diagnosis accuracy and diagnosis resolution against area overhead, and we show that 100% diagnosis accuracy and up to 98% diagnosis resolution can be achieved at negligible cost.
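    The abstract does not detail the signature-generation unit or the grading algorithm, so the following is only a generic sketch of grading a power-switch fault from a measured signature (here a virtual-VDD settling time) against calibrated bins; all thresholds are hypothetical:

```python
GRADE_BINS = [           # (upper settling-time bound in ns, reported grade) - assumed values
    (5.0,  "fault-free"),
    (8.0,  "weak fault"),
    (15.0, "moderate fault"),
]

def grade_power_switch(settling_time_ns):
    """Map a measured signature onto a fault-magnitude grade."""
    for bound, grade in GRADE_BINS:
        if settling_time_ns <= bound:
            return grade
    return "severe fault / open"

print(grade_power_switch(4.2))    # fault-free
print(grade_power_switch(11.0))   # moderate fault
```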

    Machine Learning for Run-Time Energy Optimisation in Many-Core Systems

    In recent years, the focus of computing has moved away from performance-centric serial computation towards energy-efficient parallel computation. This necessitates run-time optimisation techniques to address the dynamic resource requirements of different applications on many-core architectures. In this paper, we report on intelligent run-time algorithms which have been experimentally validated for managing energy and application performance in many-core embedded systems. The algorithms are underpinned by a cross-layer system approach in which the hardware, system software and application layers work together to optimise the energy-performance trade-off. Algorithm development is motivated by the biological process through which a human brain (acting as an agent) interacts with the external environment (the system), changing their respective states over time. Each action leads to a pay-off, and the agent eventually learns to take the best decisions in the future. In particular, our online approach uses a model-free reinforcement learning algorithm that selects the appropriate voltage-frequency scaling based on workload prediction to meet the applications’ performance requirements, achieving energy savings of up to 16% compared to state-of-the-art techniques when tested on four ARM A15 cores of an ODROID-XU3 platform.
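    A minimal sketch of a model-free reinforcement-learning (Q-learning) loop for voltage-frequency selection; the state/action encoding, reward shaping and constants are assumptions for illustration rather than the paper's exact algorithm:

```python
import random

N_WORKLOAD_BINS = 4          # states: quantised predicted workload
VF_LEVELS = [0, 1, 2, 3]     # actions: indices into available voltage-frequency pairs
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = [[0.0] * len(VF_LEVELS) for _ in range(N_WORKLOAD_BINS)]

def choose_vf(workload_bin):
    """Epsilon-greedy selection of a voltage-frequency level."""
    if random.random() < EPSILON:
        return random.choice(VF_LEVELS)
    row = Q[workload_bin]
    return row.index(max(row))

def update(workload_bin, vf, reward_value, next_bin):
    """Standard Q-learning update after observing the energy/performance outcome."""
    best_next = max(Q[next_bin])
    Q[workload_bin][vf] += ALPHA * (reward_value + GAMMA * best_next - Q[workload_bin][vf])

def reward(energy_joules, deadline_missed):
    """Example reward: penalise energy use and missed performance targets."""
    return -energy_joules - (10.0 if deadline_missed else 0.0)
```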

    A Fast and Accurate Process Variation-Aware Modeling Technique for Resistive Bridge Defects


    Sequence-aware watermark design for soft IP embedded processors


    Visualizing spatially correlated dynamics that directs RNA conformational transitions

    RNAs fold into three-dimensional (3D) structures that subsequently undergo large, functionally important conformational transitions in response to a variety of cellular signals (1-3). RNA structures are believed to encode spatially tuned flexibility that can direct transitions along specific conformational pathways (4,5). However, this hypothesis has proved difficult to examine directly because atomic movements in complex biomolecules cannot be visualized in 3D by using current experimental methods. Here we report the successful implementation of a strategy using NMR that has allowed us to visualize, with complete 3D rotational sensitivity, the dynamics between two RNA helices that are linked by a functionally important trinucleotide bulge over timescales extending up to milliseconds. The key to our approach is to anchor NMR frames of reference onto each helix and thereby directly measure their dynamics, one relative to the other, using 'relativistic' sets of residual dipolar couplings (RDCs) (6,7). Using this approach, we uncovered super-large-amplitude helix motions that trace out a surprisingly structured and spatially correlated 3D dynamic trajectory. The two helices twist around their individual axes by approximately 53 and 110 degrees in a highly correlated manner (R = 0.97) while simultaneously (R = 0.81-0.92) bending by about 94 degrees. Remarkably, the 3D dynamic trajectory is dotted at various positions by seven distinct ligand-bound conformations of the RNA. Thus even partly unstructured RNAs can undergo structured dynamics that directs ligand-induced transitions along specific predefined conformational pathways.
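    As a side note on what a correlation such as R = 0.97 between the two helix twists means in practice, here is a small sketch computing the Pearson correlation of two twist-angle trajectories; the angle values are hypothetical, not the published data:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

twist_helix1 = [0, 10, 21, 33, 41, 53]     # degrees, hypothetical trajectory
twist_helix2 = [0, 22, 45, 68, 88, 110]    # degrees, hypothetical trajectory
print(round(pearson_r(twist_helix1, twist_helix2), 3))  # near 1.0 for correlated twisting
```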